57 research outputs found

    BB-RTE: a Budget-Based RunTime Engine for Mixed and Safety Critical Systems

    The safety-critical industry is considering a shift from single-core to multi-core COTS processors for safety- and time-critical computers, in order to maximize performance while reducing costs. In a domain where time predictability is a major concern due to regulation standards, multi-core processors introduce new sources of timing variation: contention arising when software accesses shared hardware resources, characterized as timing interference. The solutions proposed in the literature to deal with timing interference all involve a trade-off between performance efficiency, time predictability, and intrusiveness in the software. In particular, none of them is able to fully exploit multi-core efficiency while allowing untouched, already-certified legacy software to run. In this paper, we introduce and evaluate BB-RTE, a Budget-Based RunTime Engine for Mixed and Safety Critical Systems, with a particular focus on mixed-criticality systems. BB-RTE guarantees the deadlines of high-criticality tasks 1) by computing, for each shared hardware resource, a budget in terms of extra accesses that the critical tasks can tolerate before their runtime is significantly impacted; and 2) by temporarily suspending low-criticality tasks at runtime once this budget has been consumed.

    METrICS: a Measurement Environment For Multi-Core Time Critical Systems

    With the upcoming shift from single-core to multi-core COTS processors for safety-critical products such as avionics, railway, or space computer subsystems, the safety-critical industry is facing a trade-off between performance and predictability. In multi-core processors, concurrent accesses to shared hardware resources generate inter-task or inter-application timing interference, breaking the timing-isolation principles required by the standards for such critical software. Several solutions have been proposed in the literature to control or regulate this timing interference, but most of them require some level of profiling, monitoring, or dimensioning. As time-critical software runs on top of Real-Time Operating Systems (RTOS), classical profiling techniques relying on interrupts, multi-threading, or OS modules are either not available or prohibited for predictability, safety, or security reasons. In this paper we present METrICS, a measurement environment for multi-core time-critical systems running on top of the industry-standard PikeOS RTOS. Our framework provides accurate real-time runtime and resource-usage measurement while having a negligible impact on timing behaviour, allowing us to fully observe and characterize timing interference. Beyond characterizing timing interference, we evaluate METrICS in terms of accuracy of the timing and resource-usage measurements, intrusiveness both in timing and in impact on the legacy code, and adherence to the hardware. We also present a portfolio of the kinds of measurements METrICS provides.

    ArchExplorer for automatic design space exploration

    Growing architectural complexity and stringent time-to-market constraints suggest the need to move architecture design beyond parametric exploration to structural exploration. ArchExplorer is a Web-based, permanent, and open design-space exploration framework that lets researchers compare their designs against others. The authors demonstrate their approach by exploring the design space of an on-chip memory subsystem and a multicore processor.

    Studying co-running avionic real-time applications on multi-core COTS architectures

    For the last decades, industries from the safety-critical domain have been using Commercial Off-The-Shelf (COTS) architectures despite their inherent runtime variability. To guarantee hard real-time constraints in such systems, designers massively relied on resource over-provisioning and on disabling the features responsible for runtime variability. The recent shift to multi-core architectures in the embedded COTS market has worsened the runtime-variability problem, as contention on shared hardware resources brings new sources of variability. Additionally, hiding this variability in additional safety margins, as was done in the past, would offset most if not all of the multi-core performance gains. To enable the use of multi-cores in this domain, it has become essential to finely characterize, at system level, the application workload as well as the possible contention on shared hardware resources. In this paper, we introduce measurement techniques based on a set of dedicated stressing benchmarks and architecture hardware monitors to characterize (1) the architecture, by identifying the shared hardware resources and their associated contention mechanisms, and (2) the application, by identifying which shared hardware resources it is sensitive to. Such information guides us toward identifying which applications can run smoothly together without endangering individual worst-case execution times.

    Tracing Hardware Monitors in the GR712RC Multicore Platform: Challenges and Lessons Learnt from a Space Case Study

    The demand for increased computing performance is driving industry in critical embedded systems (CES) domains, e.g. space, towards the use of multicore processors. Multicores, however, pose several challenges that must be addressed before their safe adoption in critical embedded domains. One of the prominent challenges is software timing analysis, a fundamental step in the verification and validation process. Monitoring and profiling solutions, traditionally used for debugging and optimization, are increasingly exploited for software timing analysis in multicores. In particular, hardware event monitors related to requests to shared hardware resources are building blocks for assessing and restraining multicore interference. Modern timing-analysis techniques build on event monitors to track and control the contention that tasks can generate on each other in a multicore platform. In this paper we look into the hardware profiling problem from an industrial perspective and address both methodological and practical problems when monitoring a multicore application. We assess the pros and cons of several profiling and tracing solutions, showing that several aspects need to be taken into account when choosing the appropriate mechanism to collect and extract profiling information from a multicore COTS platform. We address the profiling problem on a representative COTS platform for the aerospace domain, finding that the availability of directly accessible hardware counters is not a given, and that it may be necessary to develop specific tools that capture both the user's needs and the requirements of the timing-analysis technique. We report challenges in developing an event-monitor tracing tool that works for bare-metal and RTEMS configurations, and show the accuracy of the developed toolset in profiling a real aerospace application. We also show how the profiling tools can be exploited, together with handcrafted benchmarks, to characterize the application behavior in terms of multicore timing interference. This work has been partially supported by a collaboration agreement between Thales Research and the Barcelona Supercomputing Center, and by the European Research Council (ERC) under the EU's Horizon 2020 research and innovation programme (grant agreement No. 772773). MINECO partially supported Jaume Abella under a Ramon y Cajal postdoctoral fellowship (RYC2013-14717).

    The next convergence: High-performance and mission-critical markets

    The well-known convergence of the high-performance computing and mobile markets has been a dominating factor in the computing market during the last two decades. In this paper we witness a new type of convergence, between the mission-critical market (such as avionics or automotive) and the mainstream consumer-electronics market. This convergence is fuelled by the common needs of both markets for more reliability, by support for mission-critical functionalities, and by the challenge of harnessing the unsustainable increases in the safety margins used to guarantee either correctness or timing. In this position paper, we present a description of this new convergence, as well as the main challenges and opportunities it brings to the computing industry.

    Facilitating the Exploration of Compositions of Program Transformations

    Static cost models have a hard time coping with hardware components exhibiting complex run-time behaviors, calling for alternative solutions. Iterative optimization is emerging as a promising research direction, but currently it is mostly limited to finding the parameters of program transformations. We want to extend the scope and efficiency of iterative optimization techniques by searching not only for the appropriate parameters of a given transformation, but for the program transformations themselves, and especially for compositions of program transformations. The purpose of this article is to introduce a framework for easily expressing compositions of program transformations. This framework relies on a unified polyhedral representation of loops and statements. The key is to clearly separate the impact of each program transformation on three components: the iteration domain, the schedule, and the memory access functions. We show that, within this framework, composing a long sequence of program transformations induces no code-size explosion. As a result, searching for compositions of transformations is not hampered by the multiplicity of compositions, and in many cases it is equivalent to testing different values for the coefficients of the representation matrices. Our techniques have been implemented on top of the Open64/ORC compiler.

    ArchExplorer.org: Joint Compiler/Hardware Exploration for Fair Comparison of Architectures

    While reproducing the experimental results of research articles is standard practice in mature domains of science, such as physics or biology, it has not yet become mainstream in computer architecture. Yet recent research shows that the lack of a fair and broad comparison of research ideas can be significantly detrimental to the progress, and thus the productivity, of research. At the same time, the complexity of architecture simulators and the fact that simulators are not systematically disseminated with novel ideas are largely responsible for this situation. While this methodology has a fundamental impact on research, it is in essence a practical issue. In this article, we present and put to the test an atypical approach that aims at overcoming this practical methodology issue, and which takes the form of an open and continuous exploration through a server-side web infrastructure. First, rather than requiring a researcher to engage in the daunting task of seeking, installing, and running the simulators of many alternative mechanisms, we propose that researchers upload their simulator to the infrastructure, where the corresponding mechanism is automatically compared against all ideas known so far. Second, the comparison takes the form of a broad compiler/hardware exploration, so that a new mechanism is deemed superior only if it can outperform a tuned baseline and all known tuned mechanisms for a given area and/or power budget. These two principles considerably facilitate a fair and quantitative comparison of research ideas. The web infrastructure is now publicly open, and we put the overall approach to the test with a set of data-cache mechanisms. We explain how the tooling and methodological issues of contributed simulators can be overcome, and we show that this broad exploration can challenge some earlier assessments about data-cache research.